Network Newton–Part I: Algorithm and Convergence
Abstract
We study the problem of minimizing a sum of convex objective functions whose components are available at different nodes of a network and where nodes are allowed to communicate only with their neighbors. Distributed gradient methods are a common approach to solving this problem. Their popularity notwithstanding, these methods rely on first-order information only and therefore exhibit slow convergence and a consequently large number of communications between nodes to approach the optimal argument. This paper proposes the network Newton (NN) method, a distributed algorithm that incorporates second-order information through a distributed implementation of approximations of a suitably chosen Newton step. The approximations are obtained by truncating the Newton step's Taylor expansion, which yields a family of methods indexed by the number K of Taylor series terms kept in the approximation. The method that keeps K terms is called NN-K and can be implemented through the aggregation of information in K-hop neighborhoods. Convergence to a point close to the optimal argument at a rate that is at least linear is proven, and a tradeoff between convergence time and distance to the optimal argument is established. Convergence rate, several practical implementation matters, and numerical analyses are presented in the companion paper [3].
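To make the truncation concrete, the sketch below approximates a Newton direction by keeping the terms of the series for the inverse of a split Hessian H = D - B up to order K. The diagonal choice of D, the toy problem data, and the helper name nn_k_direction are assumptions made for illustration, not the paper's exact construction; in the paper's setting D is block diagonal and therefore invertible locally at each node, while B carries the sparsity pattern of the network, so each extra series term costs one more exchange within a node's neighborhood and NN-K can be computed by aggregating information over K-hop neighborhoods.

import numpy as np

def nn_k_direction(D, B, g, K):
    """Approximate d = -(D - B)^{-1} g by truncating the series
    (D - B)^{-1} = sum_{k >= 0} (D^{-1} B)^k D^{-1} at order K."""
    D_inv = np.linalg.inv(D)      # in NN-K this inverse is formed blockwise at each node
    d = -D_inv @ g                # zeroth-order term (NN-0)
    for _ in range(K):
        # Multiplying by B only mixes neighboring entries, so each added
        # term corresponds to one more round of neighborhood communication.
        d = D_inv @ (B @ d - g)
    return d

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n = 6
    A = rng.standard_normal((n, n))
    H = A @ A.T + 3 * n * np.eye(n)   # toy positive definite "Hessian"
    D = np.diag(np.diag(H))           # illustrative diagonal/off-diagonal split
    B = D - H                         # so that H = D - B
    g = rng.standard_normal(n)

    exact = -np.linalg.solve(H, g)
    for K in (0, 1, 2, 5):
        err = np.linalg.norm(nn_k_direction(D, B, g, K) - exact)
        print(f"K = {K}: error vs. exact Newton direction = {err:.2e}")

On this strongly diagonally dominant toy matrix the series converges, so the printed errors shrink as K grows, illustrating how each additional retained term, at the price of one more communication round, brings the approximation closer to the exact Newton step.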
Similar Resources
Network Newton–Part II: Convergence Rate and Implementation
The use of network Newton methods for the decentralized optimization of a sum cost distributed through agents of a network is considered. Network Newton methods reinterpret distributed gradient descent as a penalty method, observe that the corresponding Hessian is sparse, and approximate the Newton step by truncating a Taylor expansion of the inverse Hessian. Truncating the series at K terms yi...
A Distributed Newton Method for Network Utility Maximization-I: Algorithm
Most existing works use dual decomposition and first-order methods to solve Network Utility Maximization (NUM) problems in a distributed manner, and these approaches suffer from slow convergence. This paper develops an alternative, fast-converging distributed Newton-type algorithm for solving NUM problems. By using novel matrix splitting techniques, both primal and dual updates for the Newton...
Quasi-Newton Methods for Nonconvex Constrained Multiobjective Optimization
Here, a quasi-Newton algorithm for constrained multiobjective optimization is proposed. Under suitable assumptions, global convergence of the algorithm is established.
Numerical solution of fuzzy linear Fredholm integro-differential equation by fuzzy neural network
In this paper, a novel hybrid method based on a learning algorithm of fuzzy neural networks and Newton-Cotes methods with positive coefficients for the solution of linear Fredholm integro-differential equations of the second kind with fuzzy initial value is presented. Here the neural network is considered as a part of a large field called neural computing or soft computing. We propose a learning algorithm from ...
Distributed Newton-type Algorithms for Network Resource Allocation
Most of today's communication networks are large-scale and comprise agents with local information and heterogeneous preferences, making centralized control and coordination impractical. This has motivated much interest in developing and studying distributed algorithms for network resource allocation problems, such as Internet routing, data collection and processing in sensor networks, and cross-...
Publication date: 2015